
Lockheed Martin Develops System to Identify and Counter Online “Disinformation,” Prototyped by DARPA

Military-backed AI tools to combat "disinformation" raise questions about accuracy, transparency, and the potential for misuse in political arenas.


Various military units around the world (notably in the UK during the pandemic) have become involved in efforts that, given the goal (censorship) and the participants (the military), are destined to become controversial, if not unlawful.

But there doesn’t seem to be much desire to learn from others’ mistakes. The temptation to bring the defense establishment into the political “war on disinformation” arena seems to be too strong to resist.

Right now in the US, Lockheed Martin is close to completing a prototype that will analyze media to “detect and defeat disinformation.”

And by media, those commissioning the tool – called the Semantic Forensics (SemaFor) program – mean everything: news, the internet, and even entertainment media. Text, audio, images, and video that are part of what’s considered “large-scale automated disinformation attacks” are supposed to be detected and labeled as false by the tool.

Development is nearly complete, and the prototype is already in use at the US Defense Department’s Defense Advanced Research Projects Agency (DARPA).

The total value of the program, awarded to Lockheed Martin by the Air Force Research Laboratory Information Directorate (acting for DARPA), comes to $37.2 million, Military and Aerospace Electronics reported.

Reports note that while past statistical detection methods have been “successful,” they are now considered “insufficient” for detecting media disinformation. That is why the focus has shifted to looking for “semantic inconsistency.”

A rather curious example is given of how “mismatched earrings” can be a giveaway that a face is not real but was generated via a GAN (a generative adversarial network, a machine learning method).
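To make the idea concrete, here is a toy sketch of a “semantic inconsistency” check in the spirit of the earring example. It is purely illustrative and assumes the two earring regions have already been cropped and turned into feature vectors; the function names, threshold, and placeholder vectors are hypothetical and have nothing to do with SemaFor’s actual internals.

```python
# Toy sketch (not SemaFor's actual method): flag a possibly GAN-generated face
# by checking whether paired facial details (e.g. the two earrings) are
# semantically consistent. A real system would use learned detectors; the
# "features" below are just placeholder vectors.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def earrings_consistent(left_feat: np.ndarray, right_feat: np.ndarray,
                        threshold: float = 0.8) -> bool:
    """Return True if the two earring crops look like a matching pair."""
    return cosine_similarity(left_feat, right_feat) >= threshold

# Placeholder feature vectors standing in for embeddings of the two crops.
rng = np.random.default_rng(0)
left = rng.normal(size=128)
right_real = left + rng.normal(scale=0.1, size=128)  # near-identical pair
right_gan = rng.normal(size=128)                     # unrelated, mismatched pair

print(earrings_consistent(left, right_real))  # likely True: consistent
print(earrings_consistent(left, right_gan))   # likely False: inconsistency flag
```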

SemaFor’s purpose is to analyze media with semantic algorithms and hand down a verdict on whether the content is authentic or falsified.

There’s more: proving that such activities come from a specific actor has so far been nearly impossible, but those behind the project seem happy with a “mental workaround”: infer, rather than identify. So, just like before?

“Attribution algorithms” are what’s needed for this, and there is more guesswork presented as reliable tech: “characterization algorithms,” which are supposed to determine not only whether media has been manipulated or generated, but also whether its purpose is malicious.
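Taken together, the program is described as combining detection, attribution, and characterization stages. Below is a minimal sketch of how such a pipeline might be wired together; every name and scoring rule in it is a hypothetical placeholder, not the actual SemaFor design.

```python
# Minimal sketch of a three-stage pipeline in the shape described for SemaFor
# (detection, attribution, characterization). All names and logic here are
# hypothetical stand-ins, not the real program.
from dataclasses import dataclass

@dataclass
class Verdict:
    manipulated: bool       # detection: is the media falsified or generated?
    likely_source: str      # attribution: inferred (not proven) origin
    malicious_intent: bool  # characterization: inferred purpose

def detect(media: bytes) -> bool:
    """Detection algorithm: look for semantic inconsistencies (placeholder)."""
    return len(media) % 2 == 0  # stand-in for a real model

def attribute(media: bytes) -> str:
    """Attribution algorithm: infer, rather than prove, a likely actor."""
    return "unknown"  # stand-in for a real model

def characterize(media: bytes, manipulated: bool) -> bool:
    """Characterization algorithm: guess whether the manipulation is malicious."""
    return manipulated  # stand-in for a real model

def analyze(media: bytes) -> Verdict:
    manipulated = detect(media)
    return Verdict(manipulated, attribute(media), characterize(media, manipulated))

print(analyze(b"example media bytes"))
```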

Now, the only thing left to find out is how the algorithms are written.

If you’re tired of censorship and surveillance, subscribe to Reclaim The Net.
